Moral preferences

Author

  • Francesca Rossi
Abstract

How do humans or machines make a decision? Whenever we make a decision, we consider our preferences over the possible options. In a social context, collective decisions are made by aggregating the preferences of the individuals. AI systems that support individual and collective decision making have been studied for a long time, and several preference modelling and reasoning frameworks have been defined and exploited in order to make the decision process and its result rational. However, little effort has been devoted to understanding whether this decision process, or its result, is ethical or moral. Rationality does not imply morality. How can we embed morality into a decision process? And how do we ensure that the decisions we make, as individuals or as a collective, are moral? In other words, how do we pass from individuals’ personal preferences to moral behaviour and decision making? When we pass from humans to AI systems, the task of modelling and embedding morality and ethical principles is even more vague and elusive. Are existing ethical theories also applicable to AI systems? On the one hand, things seem easier, since we can narrow the scope of an AI system so that contextual information helps us define the correct moral values it should follow. However, it is not clear which moral values we should embed in the system, nor how to embed them. Should we code them as a set of rules, or should we let the system learn the values by observing us humans? Preferences and ethical theories are similar in at least one respect: both define priorities over actions. So, can we use existing preference formalisms to model ethical theories as well? We discuss how to exploit and adapt current preference formalisms in order to model morality and ethical theories, as well as the dynamic integration of a moral code into personal preferences. We also discuss the use of meta-preferences, since morality seems to need a way to judge preferences according to their morality level. It is imperative that we build intelligent systems that behave morally. If such systems are to work and live with us, we need to trust them, and this requires that we are “reasonably” sure that they behave morally, according to values that are aligned with human ones.
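
As a rough sketch of how a preference formalism might carry a moral code, consider the following toy Python example (ours, not from the paper; the action set, the MORAL_PRIORITY relation, and the Borda-style aggregation are all illustrative assumptions). An ethical theory is modelled as a set of required priorities over actions, and a meta-preference admits only those individual rankings that respect it before aggregation:

    # Toy sketch: an ethical theory as required priorities over actions,
    # and a meta-preference that screens individual rankings for moral
    # compliance before a Borda-style aggregation.
    ACTIONS = ["help", "ignore", "harm"]

    # (a, b) means the moral code requires ranking a strictly above b.
    MORAL_PRIORITY = {("help", "harm"), ("ignore", "harm")}

    def complies(ranking):
        # Meta-preference: admissible iff every required priority holds.
        pos = {a: i for i, a in enumerate(ranking)}
        return all(pos[a] < pos[b] for a, b in MORAL_PRIORITY)

    def aggregate(rankings):
        # Borda count restricted to the morally admissible rankings.
        admissible = [r for r in rankings if complies(r)]
        scores = {a: 0 for a in ACTIONS}
        for r in admissible:
            for i, a in enumerate(r):
                scores[a] += len(ACTIONS) - 1 - i
        return max(scores, key=scores.get)

    prefs = [["help", "ignore", "harm"],
             ["harm", "ignore", "help"],  # violates the moral code; screened out
             ["ignore", "help", "harm"],
             ["help", "ignore", "harm"]]
    print(aggregate(prefs))              # -> help

The meta-level test plays exactly the role the abstract raises: rankings are themselves judged for their morality before they are allowed to influence the collective decision.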


Similar articles

Aggregating Moral Preferences

Preference-aggregation problems arise in various contexts. One such context, little explored by social choice theorists, is metaethical. ‘Ideal-advisor’ accounts, which have played a major role in metaethics, propose that moral facts are constituted by the idealized preferences of a community of advisors. Such accounts give rise to a preference-aggregation problem: namely, aggregating the adviso...
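
A minimal sketch of why such aggregation is hard (our example, not the article's; the advisors and options are invented): with three hypothetical advisors and pairwise majority voting, the aggregated “ideal” preference can cycle.

    # Toy example: three hypothetical advisors, pairwise majority voting.
    advisors = [["a", "b", "c"],
                ["b", "c", "a"],
                ["c", "a", "b"]]

    def majority_prefers(x, y):
        # True iff a strict majority of advisors rank x above y.
        votes = sum(r.index(x) < r.index(y) for r in advisors)
        return votes > len(advisors) / 2

    for x, y in [("a", "b"), ("b", "c"), ("c", "a")]:
        print(x, "beats", y, ":", majority_prefers(x, y))
    # All three lines print True: a beats b, b beats c, c beats a.
    # A Condorcet cycle: no coherent collective ranking exists here.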


Double Standards: Social Preferences and Moral Biases

Abstract A consensus seems to be emerging in economics that at least three motives are at work in many strategic decisions: distributive preferences, reciprocal preferences and self-interest. An important obstacle to this research, however, has been moral biases, i.e., the distortions created by self-interest that can obscure social preferences. Among other things, this has led to disagreement ...


16 State-Dependent Utility and Decision Theory

1 Technical Summary 9
2 Introduction, Retrospect and Preview 11
2.1 Retrospect: Theory 11
2.2 Retrospect: Applications and Moral Hazard 13
2.3 One-Person Games with Moral Hazard 15
2.4 Motivation and Organisation 16
3 A General Framework 18
4 Games Against Nature 20
5 Hypothetical Preferences 22
6 Games with Moral Hazard 27
7 Conditional Expected Utility 34
7.1 Representation Theorem 34
7.2 Ext...


Team Incentives under Moral and Altruistic Preferences: Which Team to Choose?

This paper studies incentive provision when agents are characterized either by homo moralis preferences, i.e., their utility is represented by a convex combination of selfish preferences and Kantian morality, or by altruism. In a moral-hazard-in-a-team setting with two agents whose efforts affect output stochastically, I demonstrate that the power of extrinsic incentives decreases with the deg...
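
For concreteness, the homo moralis utility mentioned here is standardly written as a convex combination of this kind (the symmetric two-agent effort notation below is our assumption, following Alger and Weibull's general form):

    u_i(e_i, e_j) = (1 - \kappa)\,\pi_i(e_i, e_j) + \kappa\,\pi_i(e_i, e_i), \qquad \kappa \in [0, 1],

where \pi_i is agent i's material payoff, e_i and e_j are the two agents' efforts, and \kappa is the degree of morality: \kappa = 0 is purely selfish, while \kappa = 1 weighs only the Kantian term, the payoff that would arise if the other agent chose i's own effort.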


Can Only One Person Be Right? The Development of Objectivism and Social Preferences Regarding Widely Shared and Controversial Moral Beliefs

Prior work has established that children and adults distinguish moral norms (e.g., hitting is wrong) from conventional norms (e.g., wearing pajamas to school is wrong). Specifically, moral norms are generally perceived as universal across time and space, similar to objective facts. We examined preschoolers’ and adults’ perceptions of moral beliefs alongside facts and opinions by asking whether ...



Journal title:

Volume   Issue

Pages  -

Publication date: 2016